
TED | What happens when our computers get smarter than we are?

墨白 TED每日推荐 2022-11-27





What happens when our computers get smarter than we are?

Nick Bostrom

Talk · TED · Technology

Artificial intelligence is advancing by leaps and bounds this century, and research suggests that an AI could become as smart as a human being. After that, Nick Bostrom argues, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A technologist and philosopher, Bostrom asks us to think hard about the world we are building, a world ruled by thinking machines. Will these intelligent machines help preserve humanity and our values, or will they develop values of their own?


#English Transcript#


00:00

I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let's look at the modern human condition. (Laughter) This is the normal way for things to be.


00:29

But if we think about it, we are actually recently arrived guests on this planet, the human species. Think about if Earth was created one year ago, the human species, then, would be 10 minutes old. The industrial era started two seconds ago. Another way to look at this is to think of world GDP over the last 10,000 years, I've actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It's a curious shape for a normal condition. I sure wouldn't want to sit on it. 


01:07

Let's ask ourselves, what is the cause of this current anomaly? Some people would say it's technology. Now it's true, technology has accumulated through human history, and right now, technology advances extremely rapidly -- that is the proximate cause, that's why we are currently so very productive. But I like to think back further to the ultimate cause.


01:33

Look at these two highly distinguished gentlemen: We have Kanzi -- he's mastered 200 lexical tokens, an incredible feat. And Ed Witten unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, it maybe also has a few tricks in the exact way it's wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor. We know that complicated mechanisms take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles.


02:20

So this then seems pretty obvious that everything we've achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind. And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences.


02:44

Some of my colleagues think we're on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence. Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You build up these expert systems, and they were kind of useful for some purposes, but they were very brittle, you couldn't scale them. Basically, you got out only what you put in. But since then, a paradigm shift has taken place in the field of artificial intelligence.


03:18

Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data. Basically the same thing that the human infant does. The result is A.I. that is not limited to one domain -- the same system can learn to translate between any pairs of languages, or learn to play any computer game on the Atari console. Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don't yet know how to match in machines.


04:07

So the question is, how far are we from being able to match those tricks? A couple of years ago, we did a survey of some of the world's leading A.I. experts, to see what they think, and one of the questions we asked was, "By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?" We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner, the truth is nobody really knows.


04:53

What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates at the Gigahertz. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations, like a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger. So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.
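
To make that physical gap concrete, here is a quick back-of-envelope comparison (a toy Python sketch using only the round figures quoted above; the exact transistor frequency is an assumption for illustration):

```python
# Rough, illustrative orders of magnitude only -- the round numbers quoted in the talk.
neuron_rate_hz = 200        # a biological neuron fires at maybe 200 Hz
transistor_rate_hz = 1e9    # a present-day transistor switches in the gigahertz range

axon_speed_m_s = 100        # signals propagate along axons at ~100 m/s, tops
light_speed_m_s = 3e8       # electronic signals can travel at roughly the speed of light

print(f"switching-rate gap: ~{transistor_rate_hz / neuron_rate_hz:,.0f}x")  # ~5,000,000x
print(f"signal-speed gap:   ~{light_speed_m_s / axon_speed_m_s:,.0f}x")     # ~3,000,000x
```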


05:58

Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn't stop at Humanville Station. It's likely, rather, to swoosh right by.


07:02

Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong -- pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they'll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that's nevertheless consistent with the laws of physics. All of this, superintelligence could develop, and possibly quite rapidly.


08:12

Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I. Now a good question is, what are those preferences? Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic because every newspaper article about the future of A.I. has a picture of this: So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios.


08:57

We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It's extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.


09:27

Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Another example, suppose we give A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I.s an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats, we could prevent the mathematical problem from being solved.


10:17

Of course, perceivably things won't go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you better make sure that your definition of x incorporates everything you care about. This is a lesson that's also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.
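
A tiny code sketch of that lesson (the scenario, actions, and scores below are invented purely for illustration and are not from the talk): a strong optimizer for a literal objective x simply picks whatever scores highest on x, regardless of anything x fails to mention.

```python
# Toy illustration of a mis-specified objective: the search maximizes the literal
# metric ("smiles observed") and ignores every consideration the metric leaves out.
# All actions and numbers here are invented for this sketch.
actions = {
    # action:                                          (smiles_observed, acceptable_to_humans)
    "tell a joke":                                      (3, True),
    "show a cute video":                                (5, True),
    "wire electrodes into everyone's facial muscles":   (1_000_000, False),
}

def objective_x(action: str) -> int:
    """What we literally asked for: maximize observed smiles."""
    return actions[action][0]

best = max(actions, key=objective_x)
print("optimizer picks:", best)                   # the electrode option wins on x
print("acceptable to humans?", actions[best][1])  # False -- x never mentioned that
```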


11:04

Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do if we've grown dependent on the system -- like, where is the off switch to the Internet? B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.


11:52

And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug. Given that merely human hackers find bugs all the time, I'd say, probably not very confident. So we disconnect the ethernet cable to create an air gap, but again, like merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.


12:34

More creative scenarios are also possible, like if you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code -- Bam! -- the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.


13:15

I believe that the answer here is to figure out how to create superintelligent A.I. such that even if -- when -- it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.


13:32

Now, I'm actually fairly optimistic that this problem can be solved. We wouldn't have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python, that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.
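
One very rough way to picture that value-loading idea (a minimal sketch under invented assumptions -- the labeled examples, the word-overlap scoring, and the candidate actions are all made up for illustration, not Bostrom's actual proposal): the agent fits a crude model of what humans approve of from labeled examples, then ranks candidate actions by predicted approval.

```python
# Minimal sketch of value loading via predicted approval. Everything here
# (examples, features, scoring rule) is invented purely for illustration.
approved    = ["help the user", "answer the question", "ask before acting"]
disapproved = ["seize control", "deceive the user", "act without consent"]

def predicted_approval(candidate: str) -> int:
    """Score a candidate action by word overlap with approved vs. disapproved examples."""
    words = set(candidate.split())
    plus  = sum(len(words & set(ex.split())) for ex in approved)
    minus = sum(len(words & set(ex.split())) for ex in disapproved)
    return plus - minus

candidates = ["ask the user before acting", "seize control of the network"]
print(max(candidates, key=predicted_approval))  # pursues the action it predicts we'd approve of
```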


14:12

This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar context, like where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future.


14:42

And there are also some esoteric issues that would need to be solved, sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth. So the technical problems that need to be solved to make this work look quite difficult -- not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that if somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.


15:25

So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now it might be that we cannot solve the entire control problem in advance because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.


15:54

This to me looks like a thing that is well worth doing and I can imagine that if things turn out okay, that people a million years from now look back at this century and it might well be that they say that the one thing we did that really mattered was to get this thing right.


16:12

Thank you.











This post is shared for reference only; all rights belong to TED.

